146 research outputs found
Attend and Interact: Higher-Order Object Interactions for Video Understanding
Human actions often involve complex interactions across several inter-related
objects in the scene. However, existing approaches to fine-grained video
understanding or visual relationship detection often rely on single object
representation or pairwise object relationships. Furthermore, learning
interactions across multiple objects over hundreds of video frames is
computationally infeasible, and performance may suffer because a large
combinatorial space must be modeled. In this paper, we propose to efficiently
learn higher-order interactions between arbitrary subgroups of objects for
fine-grained video understanding. We demonstrate that modeling object
interactions significantly improves accuracy for both action recognition and
video captioning, while requiring less than one-third of the computation of
traditional pairwise modeling. The proposed method is validated on two
large-scale datasets: Kinetics and ActivityNet Captions. Our SINet and
SINet-Caption achieve state-of-the-art performances on both datasets even
though the videos are sampled at a maximum of 1 FPS. To the best of our
knowledge, this is the first work to model object interactions on open-domain,
large-scale video datasets; in addition, our higher-order object interactions
improve performance at low computational cost.
Comment: CVPR 201
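The computational argument above can be illustrated with a minimal NumPy sketch: modeling every object pair scales quadratically in the number of objects, while attending over arbitrary subgroups with a small number of learned queries scales linearly. The group count, dimensions, and the soft-attention grouping below are illustrative assumptions, not the paper's actual SINet architecture.

```python
import numpy as np

rng = np.random.default_rng(0)
N, d, K = 30, 64, 3  # objects per frame, feature dim, interaction groups (hypothetical sizes)

objects = rng.standard_normal((N, d))

# Pairwise modeling: one relation vector per ordered object pair -> O(N^2) relations.
pairs = np.concatenate(
    [np.repeat(objects, N, axis=0), np.tile(objects, (N, 1))], axis=1
)  # (N*N, 2d)

# Higher-order grouping: K learned queries attend over all objects, so each
# group embedding is a soft selection of an arbitrary object subgroup -> O(N*K).
queries = rng.standard_normal((K, d))
scores = queries @ objects.T / np.sqrt(d)                        # (K, N)
weights = np.exp(scores) / np.exp(scores).sum(axis=1, keepdims=True)
groups = weights @ objects                                       # (K, d)

print(pairs.shape, groups.shape)  # (900, 128) (3, 64)
```

With N = 30 objects, the pairwise representation already requires 900 relation vectors per frame, while the subgroup representation needs only K = 3, which is where the computational saving comes from.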
Structure-Encoding Auxiliary Tasks for Improved Visual Representation in Vision-and-Language Navigation
In Vision-and-Language Navigation (VLN), researchers typically take an image
encoder pre-trained on ImageNet without fine-tuning on the environments that
the agent will be trained or tested on. However, the distribution shift between
the training images from ImageNet and the views in the navigation environments
may render the ImageNet pre-trained image encoder suboptimal. Therefore, in
this paper, we design a set of structure-encoding auxiliary tasks (SEA) that
leverage the data in the navigation environments to pre-train and improve the
image encoder. Specifically, we design and customize (1) 3D jigsaw, (2)
traversability prediction, and (3) instance classification to pre-train the
image encoder. Through rigorous ablations, our SEA pre-trained features are
shown to better encode the structural information of scenes, which ImageNet
pre-trained features fail to capture but which is crucial for the target
navigation task. The SEA pre-trained features can be easily plugged into
existing VLN agents without any tuning. For example, on Test-Unseen
environments, the VLN agents combined with our SEA pre-trained features achieve
absolute success rate improvement of 12% for Speaker-Follower, 5% for
Env-Dropout, and 4% for AuxRN.
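The auxiliary tasks above derive supervision from the navigation environments themselves. As a toy illustration of the jigsaw-style idea, the sketch below slices a view into strips, permutes them, and produces the permutation as a self-supervised label for the encoder to predict; the strip-based slicing and sizes are hypothetical simplifications, not the paper's 3D jigsaw formulation.

```python
import numpy as np

rng = np.random.default_rng(1)

def make_jigsaw_example(view, n_patches=4):
    """Shuffle patches of a view; the permutation is the training label."""
    patches = np.split(view, n_patches, axis=0)   # cut the view into strips
    perm = rng.permutation(n_patches)             # permutation to predict
    shuffled = np.concatenate([patches[i] for i in perm], axis=0)
    return shuffled, perm

view = rng.standard_normal((8, 16))               # toy stand-in for an environment view
shuffled, label = make_jigsaw_example(view)
print(shuffled.shape, label)
```

An encoder trained to recover `label` from `shuffled` is pushed to represent spatial structure, which is the intuition behind using such tasks to improve VLN features.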
Polyhistor: Parameter-Efficient Multi-Task Adaptation for Dense Vision Tasks
Adapting large-scale pretrained models to various downstream tasks via
fine-tuning is a standard method in machine learning. Recently,
parameter-efficient fine-tuning methods show promise in adapting a pretrained
model to different tasks while training only a few parameters. Despite their
success, most existing methods are proposed in Natural Language Processing
tasks with language Transformers, and adaptation to Computer Vision tasks with
Vision Transformers remains under-explored, especially for dense vision tasks.
Further, in multi-task settings, individually fine-tuning and storing separate
models for different tasks is inefficient. In this work, we provide an
extensive multi-task parameter-efficient benchmark and examine existing
parameter-efficient fine-tuning methods from NLP for vision tasks. Our results
on four different dense vision tasks show that existing methods cannot be
efficiently integrated due to the hierarchical nature of the Hierarchical
Vision Transformers. To overcome this issue, we propose Polyhistor and
Polyhistor-Lite, consisting of Decomposed HyperNetworks and Layer-wise Scaling
Kernels, to share information across different tasks with a few trainable
parameters. This leads to favorable performance improvements against existing
parameter-efficient methods while using fewer trainable parameters.
Specifically, Polyhistor achieves competitive accuracy compared to the
state-of-the-art while only using ~10% of their trainable parameters.
Furthermore, our methods show larger performance gains when large networks and
more pretraining data are used.
Comment: Accepted to NeurIPS 2022; Project Page is at
https://ycliu93.github.io/projects/polyhistor.htm
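The decomposed-hypernetwork idea can be sketched as follows: rather than storing a separate adapter per task, a shared hypernetwork maps a small task embedding to low-rank adapter factors, so tasks share parameters through the generator. The dimensions, initialization, and the residual low-rank form below are illustrative assumptions, not the actual Polyhistor design.

```python
import numpy as np

rng = np.random.default_rng(2)
d, r, n_tasks, e = 64, 4, 3, 8  # hidden dim, adapter rank, tasks, task-embedding dim (hypothetical)

# Shared hypernetwork weights: they generate per-task adapter factors A and B
# from a tiny task embedding, instead of storing full per-task adapters.
task_emb = rng.standard_normal((n_tasks, e))
W_a = rng.standard_normal((e, d * r)) * 0.01
W_b = rng.standard_normal((e, r * d)) * 0.01

def adapter(task_id, x):
    A = (task_emb[task_id] @ W_a).reshape(d, r)   # task-specific down-projection
    B = (task_emb[task_id] @ W_b).reshape(r, d)   # task-specific up-projection
    return x + x @ A @ B                          # residual low-rank adaptation

x = rng.standard_normal((5, d))
print(adapter(0, x).shape)  # (5, 64)

# Per-task storage drops from a full d x d adapter to one e-dim embedding,
# with the two generators W_a and W_b shared across all tasks.
```

Because only the embeddings and the shared generators are trainable, adding a task costs `e` new parameters rather than a full adapter, which is the mechanism behind the parameter savings the abstract reports.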